Effective communication requires adapting to the idiosyncratic common ground shared with each communicative partner. We study a particularly challenging instantiation of this problem: the popular game Dixit. We formulate a round of Dixit as a multi-agent image reference game in which a (trained) speaker model describes a target image such that one (pretrained) listener model can correctly identify it among a set of distractors, but another listener cannot. To adapt to this setting, the speaker must exploit differences in the common ground it shares with the different listeners. We show that training an attention-based adapter between a CLIP vision encoder and a large language model in this contrastive multi-agent setting yields context-dependent natural-language specialization without direct supervision. In a series of controlled experiments, we show that the speaker can adapt to the idiosyncratic strengths and weaknesses of various pairs of listeners. Furthermore, we show zero-shot transfer of the speaker's specialization to unseen real-world data. Our experiments offer a step toward adaptive communication in complex multi-partner settings and highlight the interesting research challenges posed by games like Dixit. We hope our work inspires creative new approaches to adapting pretrained models.
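The contrastive objective described above — be understood by one listener but not the other — can be made concrete with a minimal sketch. This is illustrative only: `listener_a`, `listener_b`, and the toy captions below are hypothetical stand-ins for the CLIP-plus-LLM models, not the paper's implementation.

```python
def contrastive_caption_score(caption, target, images, listener_a, listener_b):
    """Score a caption by how much better listener A resolves the target
    image than listener B does. Each listener maps (caption, images) to a
    probability distribution over the candidate images."""
    p_a = listener_a(caption, images)[target]
    p_b = listener_b(caption, images)[target]
    return p_a - p_b

def pick_caption(captions, target, images, listener_a, listener_b):
    """Choose the caption that maximizes the contrastive score."""
    return max(
        captions,
        key=lambda c: contrastive_caption_score(c, target, images, listener_a, listener_b),
    )
```

In the actual game the speaker generates candidate descriptions rather than selecting from a fixed pool, but the selection pressure is the same.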
Visual domain adaptation (DA) seeks to transfer trained models to unseen, unlabeled domains under distribution shift, but approaches typically focus on adapting convolutional neural network architectures initialized with supervised ImageNet representations. In this work, we shift the focus to adapting modern architectures for object recognition: the increasingly popular Vision Transformer (ViT), and modern pretraining based on self-supervised learning (SSL). Inspired by recent SSL approaches that learn from partial image inputs generated via masking or cropping — either by learning to predict the missing pixels, or by learning representational invariance to such augmentations — we propose PACMAC, a simple two-stage adaptation algorithm for self-supervised ViTs. PACMAC first performs in-domain SSL on pooled source and target data to learn task-discriminative features, and then probes the model's predictive consistency across a set of partial target inputs generated via a novel attention-conditioned masking strategy to identify reliable candidates for self-training. Our simple approach leads to consistent performance gains over competing methods that use ViTs and self-supervised initializations on standard object recognition benchmarks. Code is available at https://github.com/virajprabhu/pacmac
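The second stage's selection rule — keep a target sample for self-training only if the model's prediction is stable across partial (masked) views — can be sketched as follows. This is an illustrative simplification: `predict` and `masked_views` stand in for the ViT classifier and the attention-conditioned masking strategy, whose actual details are in the paper and repository.

```python
def consistent_candidates(predict, images, masked_views):
    """Select self-training candidates whose predicted label agrees
    across every partial (masked) view of the input.

    predict:      image -> predicted class index
    masked_views: image -> list of partially-masked copies of the image
    Returns (image, pseudo_label) pairs deemed reliable.
    """
    selected = []
    for img in images:
        label = predict(img)
        if all(predict(view) == label for view in masked_views(img)):
            selected.append((img, label))
    return selected
```

Samples whose prediction flips under masking are excluded from the self-training set, which is what filters out unreliable pseudo-labels.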
We consider a variant of the best-arm identification (BAI) problem in multi-armed bandits (MAB), in which there are two sets of arms (source and target) and the objective is to determine the best target arm while only pulling source arms. In this paper, we study the setting where, although the means are unknown, there is a known additive relationship between the source and target MAB instances. We show how our framework covers a range of previously studied pure-exploration problems and also captures new ones. We propose and theoretically analyze an LUCB-style algorithm for identifying an $\epsilon$-optimal target arm with high probability. Our theoretical analysis highlights aspects of this transfer-learning problem that do not arise in the typical BAI setting, yet recovers the LUCB algorithm for single-domain BAI as a special case.
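To make the transfer structure concrete, here is a minimal sketch of how confidence intervals on target arms could be derived from source pulls under a known additive relationship, together with an LUCB-style stopping check. The Hoeffding-style radius and the simple one-source-arm-per-target pairing are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import math

def target_intervals(source_means_hat, source_pulls, offsets, delta):
    """Confidence intervals for target arms when each target mean is a
    known additive offset of one source mean:
        mu_target[i] = mu_source[i] + offsets[i].
    Uses an illustrative Hoeffding-style radius per arm."""
    intervals = []
    for mu_hat, n, c in zip(source_means_hat, source_pulls, offsets):
        radius = math.sqrt(math.log(1.0 / delta) / (2 * n))
        intervals.append((mu_hat + c - radius, mu_hat + c + radius))
    return intervals

def lucb_stop(intervals, epsilon):
    """LUCB-style stopping rule: stop when the empirically best target
    arm's lower bound exceeds every other arm's upper bound minus epsilon.
    Returns (should_stop, index_of_empirical_best)."""
    best = max(range(len(intervals)), key=lambda i: sum(intervals[i]) / 2)
    lcb_best = intervals[best][0]
    ucb_rest = max(ub for i, (lb, ub) in enumerate(intervals) if i != best)
    return lcb_best >= ucb_rest - epsilon, best
```

The key point the sketch illustrates: all statistical information about the target arms flows through the source pulls, so the sampling rule must allocate source pulls to shrink the *target* intervals that matter.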
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify each pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground-truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed a one-of-a-kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction, which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video to only the parts containing motion, and performs anomaly detection on the trimmed video.
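The per-pixel statistic the ADNN consumes — a probability distribution built from the histogram of one pixel's intensities over time — can be sketched like this. It is a toy version: the bin count and normalization are our illustrative choices, not the exact preprocessing of Zhao et al.

```python
def temporal_histogram(pixel_series, n_bins=8, vmax=256):
    """Histogram of a single pixel's intensity values over time,
    normalized into a probability distribution.

    pixel_series: intensities of one pixel location across frames
    n_bins:       number of histogram bins over [0, vmax)
    """
    counts = [0] * n_bins
    for v in pixel_series:
        counts[min(n_bins - 1, v * n_bins // vmax)] += 1
    total = len(pixel_series)
    return [c / total for c in counts]
```

A stable background pixel concentrates its mass in few bins, while a pixel that foreground objects pass over spreads mass across bins — the kind of distributional cue a background-subtraction model can exploit.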
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes, such as supervised data, is a current challenge for Natural Language Processing (NLP). We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions in low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be most beneficial for obtaining labels, using selected query techniques. We implemented two model-driven acquisition functions for selecting the samples to be validated. This work uses transformer-based NMT systems for translating English to Hindi: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence-based model (ALLCM), and an active learning margin-sampling-based model (ALMSM). The Bilingual Evaluation Understudy (BLEU) metric was used to evaluate system results. The BLEU scores of the BM, FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model converge early and improve the overall quality of the translation system.
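The two acquisition functions named above have standard definitions, sketched here over per-sample class-probability vectors. The `select_batch` helper and the toy pool are illustrative scaffolding, not Joey NMT's actual API.

```python
def least_confidence(probs):
    """Uncertainty = 1 - probability of the model's top prediction."""
    return 1.0 - max(probs)

def margin_score(probs):
    """Uncertainty = negative margin between the top-2 probabilities
    (a smaller margin means more uncertainty, hence a higher score)."""
    top2 = sorted(probs, reverse=True)[:2]
    return -(top2[0] - top2[1])

def select_batch(pool_probs, k, acquisition):
    """Return the indices of the k most uncertain unlabeled samples."""
    ranked = sorted(
        range(len(pool_probs)),
        key=lambda i: acquisition(pool_probs[i]),
        reverse=True,
    )
    return ranked[:k]
```

In the NMT setting the "probabilities" would come from the decoder's output distribution (e.g. averaged over a candidate sentence); the ranking logic is unchanged.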
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks become more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within the task. We generalize this finding to meta-RL and study the dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.
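One way to operationalize a "slowly increasing discount factor" heuristic of the kind described — a shorter effective planning horizon while the estimated model rests on little data, growing as tasks and samples accumulate — is sketched below. The specific functional form and the constant `c` are our own illustrative choices, not the heuristic derived in the paper.

```python
def adaptive_discount(n_tasks, n_samples_per_task, gamma_max=0.99, c=1.0):
    """Illustrative heuristic: discount factor (hence effective planning
    horizon 1/(1-gamma)) grows with the total amount of experience and
    is capped at gamma_max."""
    n_eff = n_tasks * n_samples_per_task
    return min(gamma_max, 1.0 - c / max(1.0, n_eff) ** 0.5)
```

With little data the agent plans myopically (small discount), limiting how far model errors compound; as estimates sharpen, the horizon approaches the evaluation horizon.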
As language models have grown in parameters and layers, it has become much harder to train and infer with them on single GPUs. This severely restricts the availability of large language models such as GPT-3, BERT-Large, and many others. A common technique to solve this problem is pruning the network architecture by removing transformer heads, fully-connected weights, and other modules. The main challenge is to discern the important parameters from the less important ones. Our goal is to find strong metrics for identifying such parameters. We thus propose two strategies for calculating importance scores: Cam-Cut, based on GradCAM interpretations, and Smooth-Cut, based on SmoothGrad. Through this work, we show that our scoring functions are able to assign more relevant task-based scores to the network parameters, and thus both our pruning approaches significantly outperform the standard weight- and gradient-based strategies, especially at higher compression ratios in BERT-based models. We also analyze our pruning masks and find them to be significantly different from the ones obtained using standard metrics.
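Once importance scores are in hand (from Cam-Cut, Smooth-Cut, or a standard weight/gradient metric), turning them into a pruning mask at a given compression ratio is a top-k selection. The sketch below is generic and assumes the scores arrive as one flat list rather than per-module tensors.

```python
def prune_mask(scores, compression_ratio):
    """Binary mask that keeps the highest-scoring parameters.

    scores:            flat list of per-parameter importance scores
    compression_ratio: fraction of parameters to remove (0..1)
    Returns a 0/1 list aligned with `scores` (1 = keep).
    """
    n_keep = len(scores) - int(len(scores) * compression_ratio)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = set(order[:n_keep])
    return [1 if i in keep else 0 for i in range(len(scores))]
```

The paper's comparison of pruning masks amounts to computing such masks under different scoring functions at the same compression ratio and measuring their overlap.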
Neoplasms (NPs) and neurological diseases and disorders (NDDs) are among the major classes of diseases underlying the deaths of a disproportionate number of people worldwide. To determine whether there exist distinctive features in the local wiring patterns of protein interactions emerging at the onset of a disease belonging to either of these two classes, we examined 112 and 175 protein interaction networks belonging to NPs and NDDs, respectively. Orbit usage profiles (OUPs) for each of these networks were enumerated by investigating the networks' local topology. 56 non-redundant OUPs (nrOUPs) were derived and used as network features for classification between these two disease classes. Four machine learning classifiers, namely k-nearest neighbour (KNN), support vector machine (SVM), deep neural network (DNN), and random forest (RF), were trained on these data. The DNN obtained the greatest average AUPRC (0.988) among these classifiers. DNNs developed on node2vec embeddings and the proposed nrOUP embeddings were compared using 5-fold cross-validation on the basis of the average values of six performance measures, viz., AUPRC, accuracy, sensitivity, specificity, precision, and MCC. The nrOUP-based classifier performed better on all six of these performance measures.
Customers are rapidly turning to social media for customer support. While brand agents on these platforms are motivated and well-intentioned to help and engage with customers, their efforts are often ignored if their initial response to the customer does not match the specific tone, style, or topic the customer is aiming to receive. The length of a conversation can reflect the effort and quality of the initial response made by a brand toward collaborating with and helping consumers, even when the overall sentiment of the conversation might not be very positive. Thus, through this study, we aim to bridge this critical gap in the existing literature by analyzing the content and stylistic aspects of language, such as expressed empathy, psycho-linguistic features, dialogue tags, and metrics for quantifying the personalization of utterances, that can influence the engagement of an interaction. This paper demonstrates that we can predict engagement using initial customer and brand posts.